S3/ Minio integration #1125
base: main
Conversation
@kaumudpa is attempting to deploy a commit to the Listinai Team on Vercel. A member of the Team first needs to authorize it.
Force-pushed from 89792d4 to abbe492
Force-pushed from 3ffe15d to 2182ac0
Force-pushed from 2182ac0 to 344a5ba
Force-pushed from 344a5ba to 7477639
Add native support for AWS S3 and S3-compatible storage services (MinIO, DigitalOcean Spaces, Backblaze B2) as an alternative to existing local and Cloudflare R2 storage options.

New features:
- Generic 's3' storage provider using AWS SDK v3
- Support for custom endpoints (MinIO/self-hosted S3-compatible services)
- Multipart upload support for large files (videos up to 1GB)
- Date-based directory organization (YYYY/MM/DD/filename)
- Automatic file deletion from storage when media is deleted

Bug fixes included:
- Enable actual file deletion from Cloudflare R2 (was no-op)
- Add file deletion from local storage
- Fix URL trailing slash handling for all providers
- Add null checks and error handling for edge cases
- Add fallback for unrecognized MIME types

New environment variables:
- S3_ENDPOINT (optional, for MinIO)
- S3_ACCESS_KEY
- S3_SECRET_KEY
- S3_BUCKET
- S3_REGION
- S3_BUCKET_URL (optional)

Fully backward compatible - existing local and cloudflare providers unchanged.
Force-pushed from 7477639 to 07ad35c
Closes #1125
@nevo-david Can we expect the PRs we create to get merged, or at least get some feedback if you do not intend to merge them?
+1 - This would be great for self-hosted users behind Cloudflare tunnels who need to bypass the 100MB upload limit via multipart uploads to local MinIO!
Changes Summary

This PR introduces AWS S3 and MinIO (S3-compatible) storage provider support as a new media storage option alongside existing local and Cloudflare R2 providers. It adds three new S3-specific modules, integrates S3 into the upload factory pattern, and enables file deletion functionality across all storage providers.

Type: feature

Components Affected: Storage abstraction layer, Media upload/download pipeline, File deletion functionality, Backend API routes, Frontend upload component, Environment configuration

Files Changed
Architecture Impact
Risk Areas:
- URL parsing and key extraction logic in s3.storage.ts removeFile() method - handles multiple URL formats (path-style, virtual-hosted, custom CDN) with fallback parsing
- Multipart upload completion - Location header is replaced with constructed URL, depends on reliable key tracking
- File deletion error handling - storage.removeFile() failures in media.service.ts are caught but logged; soft delete continues (could lead to orphaned files)
- Environment variable validation - S3 requires 4 mandatory variables; missing variables throw during factory instantiation
- Date-based directory structure - generateDatePath() is duplicated across S3Storage and S3Uploader classes (DRY violation); a shared helper is sketched below
- Credentials in environment - S3_ACCESS_KEY and S3_SECRET_KEY are exposed as environment variables (standard but worth noting)

Suggestions
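On the duplicated generateDatePath() noted above, a shared helper in s3.utils.ts would remove the DRY violation. A minimal sketch - the function name comes from the review, and the body is an assumed implementation of the YYYY/MM/DD layout the PR describes, not the actual PR code:

// Hypothetical shared helper for s3.utils.ts; the PR currently
// duplicates this logic in S3Storage and S3Uploader.
export function generateDatePath(date: Date = new Date()): string {
  const year = date.getFullYear();
  const month = String(date.getMonth() + 1).padStart(2, '0');
  const day = String(date.getDate()).padStart(2, '0');
  return `${year}/${month}/${day}`; // e.g. 2025/12/16
}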
Full review in progress...
  @Res() res: Response,
  @Param('endpoint') endpoint: string
) {
  const upload = await handleR2Upload(endpoint, req, res);
  // Route to appropriate handler based on storage provider
  let upload;
  if (process.env.STORAGE_PROVIDER === 's3' && this.s3Uploader) {
    upload = await this.s3Uploader.handleUpload(endpoint, req, res);
  } else {
    upload = await handleR2Upload(endpoint, req, res);
  }

  if (endpoint !== 'complete-multipart-upload') {
    return upload;
  }

  // Check if response was already sent (e.g., error handler sent it)
  if (res.headersSent) {
    return;
  }

  // @ts-ignore - upload is CompleteMultipartUploadCommandOutput here
  if (!upload?.Location) {
    return res.status(500).json({ error: 'Upload failed - no location returned' });
  }
  // @ts-ignore
  const name = upload.Location.split('/').pop();
🟠 HIGH - Missing idempotency for multipart upload completion
Agent: bugs
Category: bug
Description:
The uploadFile endpoint saves media after multipart upload completes but lacks idempotency protection. Network failures after upload but before DB save could cause duplicate entries on retry.
Suggestion:
Implement idempotent file saving using uploadId as idempotency key, or add unique constraint on file URL/checksum.
Confidence: 70%
Rule: gen_double_charge_no_optimistic_lock
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
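A sketch of the suggested guard, assuming a hypothetical repository with a unique constraint on the stored path - none of these names are from the PR:

interface MediaRepository {
  findByPath(orgId: string, path: string): Promise<{ id: string } | null>;
  create(data: {
    organizationId: string;
    name: string;
    path: string;
  }): Promise<{ id: string }>;
}

// Use the completed upload's URL as an idempotency key: a client retry
// after a network failure finds the existing row instead of duplicating it.
async function saveMediaIdempotent(
  repo: MediaRepository,
  orgId: string,
  name: string,
  url: string
) {
  const existing = await repo.findByPath(orgId, url);
  if (existing) return existing;
  return repo.create({ organizationId: orgId, name, path: url });
}

A real implementation would back this with a unique index so concurrent retries also collapse to a single row rather than racing between the find and the create.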
async uploadSimple(path: string): Promise<string> {
  const loadImage = await fetch(path);
  const contentType =
    loadImage?.headers?.get('content-type') ||
    loadImage?.headers?.get('Content-Type');
  // Fallback to 'bin' if MIME type is unrecognized, or try to extract from URL
  const extension = getExtension(contentType) || path.split('.').pop()?.split('?')[0] || 'bin';
  const id = makeId(10);
  const datePath = this.getDatePath();
  const key = `${datePath}/${id}.${extension}`;

  const params = {
    Bucket: this._bucketName,
    Key: key,
    Body: Buffer.from(await loadImage.arrayBuffer()),
🟠 HIGH - Unhandled HTTP error status in uploadSimple()
Agent: bugs
Category: bug
Description:
fetch() does not throw on HTTP error statuses (4xx/5xx). The code proceeds to call arrayBuffer() without checking response.ok, potentially processing error responses as valid data.
Suggestion:
Check response.ok before processing: if (!loadImage.ok) throw new Error(`HTTP ${loadImage.status}: ${loadImage.statusText}`);
Confidence: 95%
Rule: bug_missing_try_catch
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
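A sketch of the suggested check, written as a small helper that would replace the first lines of uploadSimple():

// fetch() resolves on 4xx/5xx, so the status must be checked before
// treating the body as valid media data.
async function fetchBody(url: string): Promise<Buffer> {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`HTTP ${res.status}: ${res.statusText}`);
  }
  return Buffer.from(await res.arrayBuffer());
}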
const command = new PutObjectCommand({
  Bucket: this.bucketName,
  Key: key,
  Body: data,
  ContentType: contentType,
  ACL: 'public-read',
});

await this.client.send(command);
return this.bucketUrl + '/' + key;
}
🟡 MEDIUM - S3 uploads lack explicit server-side encryption
Agent: compliance
Category: security
Description:
PutObjectCommand does not include ServerSideEncryption field. Files rely on bucket-level default encryption settings if any. Explicit encryption ensures compliance regardless of bucket config.
Suggestion:
Add ServerSideEncryption: 'AES256' or 'aws:kms' to PutObjectCommand parameters.
Confidence: 75%
Rule: soc2_encrypt_data_at_rest
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
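A sketch of the suggested parameter, using illustrative bucket/key values rather than the PR's code. One caveat: some S3-compatible backends may not honor this header, so it likely needs to be configurable rather than unconditional:

import { PutObjectCommand } from '@aws-sdk/client-s3';

const command = new PutObjectCommand({
  Bucket: 'my-bucket',            // illustrative values, not from the PR
  Key: '2025/12/16/abc123.png',
  Body: Buffer.from('...'),
  ContentType: 'image/png',
  ServerSideEncryption: 'AES256', // or 'aws:kms' together with SSEKMSKeyId
});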
export function createS3Uploader(): S3Uploader {
  const required = ['S3_ACCESS_KEY', 'S3_SECRET_KEY', 'S3_REGION', 'S3_BUCKET'];
  const missing = required.filter((v) => !process.env[v]);
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables for S3 uploader: ${missing.join(', ')}`
    );
  }

  return new S3Uploader({
    endpoint: process.env.S3_ENDPOINT || undefined,
    accessKey: process.env.S3_ACCESS_KEY!,
    secretKey: process.env.S3_SECRET_KEY!,
    region: process.env.S3_REGION!,
    bucketName: process.env.S3_BUCKET!,
    bucketUrl: process.env.S3_BUCKET_URL || undefined,
  });
}
🟡 MEDIUM - S3 credentials from environment variables
Agent: compliance
Category: security
Description:
createS3Uploader() loads AWS credentials from process.env. This is a common pattern but lacks secrets manager integration for credential rotation.
Suggestion:
Consider AWS Secrets Manager integration if compliance requirements mandate credential rotation.
Confidence: 60%
Rule: soc2_secrets_manager_rotation
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
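If rotation were required, credentials could be fetched at factory time instead of read from process.env. A sketch using the AWS SDK's Secrets Manager client; the secret name and JSON shape are assumptions:

import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from '@aws-sdk/client-secrets-manager';

async function loadS3Credentials(secretId: string) {
  const client = new SecretsManagerClient({});
  const out = await client.send(new GetSecretValueCommand({ SecretId: secretId }));
  // Assumed secret payload: {"accessKey": "...", "secretKey": "..."}
  const { accessKey, secretKey } = JSON.parse(out.SecretString ?? '{}');
  return { accessKey, secretKey };
}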
@Delete('/:id')
deleteMedia(@GetOrgFromRequest() org: Organization, @Param('id') id: string) {
🟡 MEDIUM - No audit logging for media deletion
Agent: compliance
Category: security
Description:
The deleteMedia endpoint directly calls _mediaService.deleteMedia() without audit logging. No record of who deleted what media or when.
Suggestion:
Add audit logging before deletion: log actor (user/org ID), action, resource ID, timestamp.
Confidence: 80%
Rule: compliance_audit_trail
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
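A sketch of the suggested audit entry, emitted before the service call. The logger shape is an assumption; any structured logger would do:

function auditMediaDeletion(orgId: string, userId: string, mediaId: string): void {
  console.info(
    JSON.stringify({
      action: 'media.delete',
      actorUserId: userId,          // who
      organizationId: orgId,        // on whose behalf
      resourceId: mediaId,          // what
      at: new Date().toISOString(), // when
    })
  );
}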
  ACL: 'public-read',
});

await this.client.send(command);
return this.bucketUrl + '/' + key;
}

private async createMultipartUpload(req: Request, res: Response) {
  const { file, fileHash, contentType } = req.body;
  if (!file?.name) {
    return res.status(400).json({ error: 'Missing file name' });
  }
  const fileExtension = path.extname(file.name);
  const randomFilename = this.generateRandomString() + fileExtension;
  const datePath = this.getDatePath();
  const key = `${datePath}/${randomFilename}`;

  try {
    const command = new CreateMultipartUploadCommand({
      Bucket: this.bucketName,
      Key: key,
      ContentType: contentType,
      ACL: 'public-read',
🟠 HIGH - Public read ACL hardcoded for all S3 uploads
Agent: security
Category: security
Description:
All S3 uploads use hardcoded ACL: 'public-read' (lines 109 and 131). Every uploaded file is publicly accessible without authentication, which may not be appropriate for all content types.
Suggestion:
Make ACL configurable based on file type or use case. Consider 'private' as default with signed URLs for access.
Confidence: 75%
Rule: sec_access_control
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
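A sketch of a configurable ACL with a private default; S3_ACL is a hypothetical variable, not one this PR defines:

import type { ObjectCannedACL } from '@aws-sdk/client-s3';

function resolveAcl(): ObjectCannedACL {
  // Default to 'private'; only serve world-readable files when explicitly opted in.
  return process.env.S3_ACL === 'public-read' ? 'public-read' : 'private';
}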
  ACL: 'public-read' as const,
};

const command = new PutObjectCommand(params);
await this._client.send(command);

return `${this._bucketUrl}/${key}`;
}

async uploadFile(file: Express.Multer.File): Promise<any> {
  try {
    const id = makeId(10);
    const extension = mime.extension(file.mimetype) || '';
    const datePath = this.getDatePath();
    const key = `${datePath}/${id}.${extension}`;

    const command = new PutObjectCommand({
      Bucket: this._bucketName,
      ACL: 'public-read',
🟠 HIGH - Public ACL hardcoded for all S3 storage uploads
Agent: security
Category: security
Description:
S3Storage uses hardcoded ACL: 'public-read' in both uploadSimple (line 87) and uploadFile (line 105). All files are world-readable.
Suggestion:
Make ACL configurable. Use 'private' ACL as default and implement signed URLs for controlled access.
Confidence: 75%
Rule: sec_access_control
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
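For the private-by-default variant, reads would go through presigned URLs. A sketch with the AWS SDK presigner; bucket, key, and expiry are illustrative:

import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Keep objects private and hand out short-lived download links instead.
async function presignDownload(
  client: S3Client,
  bucket: string,
  key: string
): Promise<string> {
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  return getSignedUrl(client, command, { expiresIn: 3600 }); // 1 hour
}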
// @ts-ignore - upload is CompleteMultipartUploadCommandOutput here
if (!upload?.Location) {
  return res.status(500).json({ error: 'Upload failed - no location returned' });
}
// @ts-ignore
const name = upload.Location.split('/').pop();
🟡 MEDIUM - Unsafe type casting for response object
Agent: react
Category: quality
Description:
Multiple @ts-ignore comments (lines 181, 185, 191) bypass TypeScript checks when accessing upload.Location. This indicates improper typing of the CompleteMultipartUploadCommandOutput response.
Suggestion:
Define proper types for the upload response and use type guards: if (upload && 'Location' in upload) { ... }
Confidence: 80%
Rule: ts_classify_http_errors_with_type_safe_narr
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
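A sketch of the suggested type guard, which would replace the @ts-ignore casts:

import type { CompleteMultipartUploadCommandOutput } from '@aws-sdk/client-s3';

function hasLocation(
  upload: unknown
): upload is CompleteMultipartUploadCommandOutput & { Location: string } {
  return (
    typeof upload === 'object' &&
    upload !== null &&
    typeof (upload as { Location?: unknown }).Location === 'string'
  );
}
// Usage: if (!hasLocation(upload)) { return res.status(500).json({ error: '...' }); }
// After the guard, upload.Location.split('/').pop() type-checks without @ts-ignore.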
async removeFile(filePath: string): Promise<void> {
  // Extract the key from the full URL
  // URL formats:
  // - Path-style (MinIO): https://minio.example.com/bucket/2025/12/16/abc123.png
  // - Virtual-hosted (AWS): https://bucket.s3.region.amazonaws.com/2025/12/16/abc123.png
  // - Custom bucket URL: https://cdn.example.com/2025/12/16/abc123.png
  let key = '';

  // Primary: Check if URL starts with configured bucket URL
  if (filePath.startsWith(this._bucketUrl)) {
    key = filePath.substring(this._bucketUrl.length).replace(/^\//, '');
  } else {
    // Fallback: Try to parse URL and extract path after bucket name
    try {
      const url = new URL(filePath);
      const pathParts = url.pathname.split('/').filter(Boolean);

      // Check if first part is the bucket name (path-style URLs)
      if (pathParts[0] === this._bucketName) {
        key = pathParts.slice(1).join('/');
      } else {
        // Virtual-hosted style or custom URL - entire path is the key
        key = pathParts.join('/');
      }
    } catch {
      // Not a valid URL, use as-is (might be just a key)
      key = filePath;
    }
  }
🟡 MEDIUM - File deletion without ownership validation
Agent: security
Category: security
Description:
The removeFile method accepts URLs and extracts S3 keys. While it validates the URL structure (lines 140-158), there's no ownership check to verify the file belongs to the requesting user/organization.
Suggestion:
Implement ownership checks at the service layer to verify file path matches expected org/user prefixes before deletion.
Confidence: 70%
Rule: sec_access_control
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
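A sketch of the suggested service-layer check. Note it presumes keys carry an organization prefix, which the PR's date-based layout does not currently include, so adopting it would mean changing the key scheme:

// Hypothetical layout: <orgId>/<YYYY>/<MM>/<DD>/<file>
function assertKeyOwnedByOrg(orgId: string, key: string): void {
  if (!key.startsWith(`${orgId}/`)) {
    throw new Error('File key does not belong to the requesting organization');
  }
}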
  @Res() res: Response,
  @Param('endpoint') endpoint: string
) {
  const upload = await handleR2Upload(endpoint, req, res);
  // Route to appropriate handler based on storage provider
  let upload;
  if (process.env.STORAGE_PROVIDER === 's3' && this.s3Uploader) {
    upload = await this.s3Uploader.handleUpload(endpoint, req, res);
  } else {
    upload = await handleR2Upload(endpoint, req, res);
  }

  if (endpoint !== 'complete-multipart-upload') {
    return upload;
  }

  // Check if response was already sent (e.g., error handler sent it)
  if (res.headersSent) {
    return;
  }

  // @ts-ignore - upload is CompleteMultipartUploadCommandOutput here
  if (!upload?.Location) {
    return res.status(500).json({ error: 'Upload failed - no location returned' });
  }
  // @ts-ignore
  const name = upload.Location.split('/').pop();
🟡 MEDIUM - Missing validation in multipart upload context
Agent: security
Category: security
Description:
The uploadFile endpoint routes multipart upload requests (line 157-170) based on endpoint parameter. While org is retrieved via @GetOrgFromRequest(), there's no validation that the uploadId/key belongs to the authenticated organization.
Suggestion:
Validate that multipart upload context (uploadId, key) belongs to the authenticated organization before processing.
Confidence: 70%
Rule: sec_broken_access_control
Review ID: bf12b8d4-b098-463e-a323-e769519b003d
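A sketch of the suggested ownership tracking for multipart uploads. The in-memory map is illustrative only; uploads span multiple requests (and possibly processes), so real code would persist this in the database or Redis:

const uploadOwners = new Map<string, string>(); // uploadId -> orgId

// Called when create-multipart-upload succeeds.
function registerMultipartUpload(uploadId: string, orgId: string): void {
  uploadOwners.set(uploadId, orgId);
}

// Called before sign-part / complete-multipart-upload are processed.
function assertMultipartOwner(uploadId: string, orgId: string): void {
  if (uploadOwners.get(uploadId) !== orgId) {
    throw new Error('Multipart upload does not belong to this organization');
  }
}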
Review Summary
Validated 95 issues: 63 kept, 32 filtered.
Issues Found: 63. See 44 individual line comments for details; 34 unique issue types across 63 locations.

Full issue list:

🔴 CRITICAL - Sensitive error objects exposed in logging (Agent: security, Category: security, Confidence: 85%)
Why this matters: AWS SDK errors can contain request IDs, account info, and operational details that could aid attackers.
Description: Full AWS SDK error objects logged without sanitization at multiple lines (143, 170, 190, 211, 228, 252). These errors may contain sensitive request details, credentials, or internal operational information.
Suggestion: Implement error sanitization. Log only error.code and error.message. Never log full error objects in production: console.log('S3 error:', { code: err.code, message: err.message })

🔴 CRITICAL - Unhandled promise rejection in fetch().json() (Agent: bugs, Category: bug, Confidence: 85%)
Why this matters: Unhandled rejections in Promise.all will fail silently or crash the operation without proper user feedback.
Description: fetch() and .json() calls inside Promise.all have no error handling. Network failures or invalid JSON will cause unhandled promise rejection.
Suggestion: Wrap in try-catch: try { const res = await fetch(...); if (!res.ok) throw new Error(...); return res.json(); } catch (error) { ... }

🟠 HIGH - Hardcoded dependencies in controller via factory (3 occurrences) (Agent: architecture, Category: quality)
Why this matters: Hardcoded dependencies make unit testing difficult and create tight coupling.

🟠 HIGH - Business logic in controller instead of service (2 occurrences) (Agent: architecture, Category: quality)
Why this matters: Controllers should be thin; business logic belongs in services for testability.

🟠 HIGH - Missing lang attribute on html element (3 occurrences) (Agent: accessibility, Category: accessibility)
Why this matters: Screen readers rely on the lang attribute to use correct pronunciation rules.

🟠 HIGH - Missing idempotency for multipart upload completion (3 occurrences) (Agent: bugs, Category: bug)

🟠 HIGH - Module-level S3 client reads env vars at import time (Agent: architecture, Category: quality, Confidence: 85%)
Description: The R2 client is instantiated at module level, reading process.env variables during import. Missing env vars cause module load failures. Also makes testing difficult.
Suggestion: Move client creation into a factory function or service class with lazy initialization and dependency injection support.

🟠 HIGH - Unhandled HTTP error status in uploadSimple() (Agent: bugs, Category: bug, Confidence: 95%)
Description: fetch() does not throw on HTTP error statuses (4xx/5xx). The code proceeds to call arrayBuffer() without checking response.ok, potentially processing error responses as valid data.
Suggestion: Check response.ok before processing: if (!loadImage.ok) throw new Error(`HTTP ${loadImage.status}: ${loadImage.statusText}`)

🟠 HIGH - Blocking mkdirSync in async function (Agent: performance, Category: performance, Confidence: 90%)
Description: mkdirSync blocks the event loop during async uploadSimple/uploadFile operations. With concurrent uploads, this degrades server performance.
Suggestion: Replace with await mkdir(dir, { recursive: true }) from fs/promises.

🟠 HIGH - Sequential presigned URL generation in loop (2 occurrences) (Agent: performance, Category: performance)

🟠 HIGH - Potential undefined response.Parts access (5 occurrences) (Agent: bugs, Category: bug)

🟠 HIGH - File size validation inconsistent between frontend and backend (2 occurrences) (Agent: consistency, Category: quality)

🟠 HIGH - Unbounded Promise.all with user-controlled file count (Agent: performance, Category: performance, Confidence: 82%)
Description: Promise.all processes all file uploads in parallel without concurrency limits. Users can upload many files, causing resource exhaustion.
Suggestion: Add concurrency limiting using p-limit or implement batching. Limit to 5-10 concurrent requests: import pLimit from 'p-limit'; const limit = pLimit(5); await Promise.all(toSave.map(name => limit(() => fetch(...))));

🟠 HIGH - Path traversal vulnerability in removeFile (Agent: security, Category: security, Confidence: 85%)
Description: The removeFile method at line 107 accepts arbitrary paths in the catch block fallback. If URL parsing fails, the raw filePath is passed directly to unlink() without validation, allowing potential path traversal attacks.
Suggestion: Validate that the resolved path is within uploadDirectory using path.resolve() and a startsWith() check before calling unlink. Never accept arbitrary paths without validation.

🟠 HIGH - Unhandled async operations in event listener (3 occurrences) (Agent: react, Category: bug)

🟠 HIGH - Public read ACL hardcoded for all S3 uploads (4 occurrences) (Agent: security, Category: security)

🟡 MEDIUM - Duplicated S3Client initialization logic (2 occurrences) (Agent: react, Category: quality)
Why this matters: Reduces maintenance cost and prevents logical drift between implementations.

🟡 MEDIUM - Monolithic file with mixed responsibilities (4 occurrences) (Agent: architecture, Category: quality)
Why this matters: Functional code with shared state is harder to test and maintain than class-based encapsulation.

🟡 MEDIUM - Cloudflare credentials from env without rotation mechanism (2 occurrences) (Agent: compliance, Category: security)

🟡 MEDIUM - Hardcoded user-facing error messages (Agent: accessibility, Category: accessibility, Confidence: 90%)
Description: Error messages use hardcoded strings instead of the translation system. The useT() hook is imported (line 12) and used elsewhere (line 297) but not for these error messages.
Suggestion: Use the useT() hook for error messages: toast.show(t('upload', 'filetype-not-allowed', { fileType, allowedTypes }))

🟡 MEDIUM - S3 uploads lack explicit server-side encryption (Agent: compliance, Category: security, Confidence: 75%)
Description: PutObjectCommand does not include a ServerSideEncryption field. Files rely on bucket-level default encryption settings, if any. Explicit encryption ensures compliance regardless of bucket config.
Suggestion: Add ServerSideEncryption: 'AES256' or 'aws:kms' to PutObjectCommand parameters.

🟡 MEDIUM - No audit logging for media deletion (2 occurrences) (Agent: compliance, Category: security)

🟡 MEDIUM - Raw error objects logged to console (Agent: compliance, Category: security, Confidence: 75%)
Description: Error objects from S3 API calls are logged directly without filtering. Error objects may contain sensitive information like request IDs or bucket names.
Suggestion: Implement structured logging with error sanitization: console.error('S3 error', { message: err.message, code: err.code })

🟡 MEDIUM - Duplicate getDatePath method (3 occurrences) (Agent: consistency, Category: quality)

🟡 MEDIUM - Debug console.log in production code (Agent: quality, Category: quality, Confidence: 100%)
Description: console.log('hello') appears to be leftover debug code with no diagnostic value.
Suggestion: Remove this debugging statement entirely.

🟡 MEDIUM - No authorization check in updateIntegration method (3 occurrences) (Agent: compliance, Category: security)

🟡 MEDIUM - Duplicate S3 configuration interface (Agent: consistency, Category: quality, Confidence: 80%)
Description: S3StorageConfig and S3UploaderConfig have identical fields. s3.utils.ts already defines a shared S3Config interface that could be extended.
Suggestion: Have S3StorageConfig and S3UploaderConfig extend the S3Config interface from s3.utils.ts, adding only accessKey/secretKey.

🟡 MEDIUM - Hardcoded file size limits should be configurable (2 occurrences) (Agent: general, Category: style)

🟡 MEDIUM - Unsafe type casting for response object (Agent: react, Category: quality, Confidence: 80%)
Description: Multiple @ts-ignore comments (lines 181, 185, 191) bypass TypeScript checks when accessing upload.Location. This indicates improper typing of the CompleteMultipartUploadCommandOutput response.
Suggestion: Define proper types for the upload response and use type guards: if (upload && 'Location' in upload) { ... }

🟡 MEDIUM - Missing dependency in useCallback (Agent: react, Category: quality, Confidence: 20%)
Description: The useCallback at line 29 has [onUploadSuccess] in dependencies but the original claim was an empty array; verified against line 34.
Suggestion: No action needed - the dependency array is correct.

🟡 MEDIUM - Missing validation in multipart upload context (Agent: security, Category: security, Confidence: 70%)
Description: The uploadFile endpoint routes multipart upload requests (lines 157-170) based on the endpoint parameter. While org is retrieved via @GetOrgFromRequest(), there's no validation that the uploadId/key belongs to the authenticated organization.
Suggestion: Validate that multipart upload context (uploadId, key) belongs to the authenticated organization before processing.

🔵 LOW - uppy.log() calls without dev-only checks (Agent: accessibility, Category: quality, Confidence: 65%)
Description: Multiple uppy2.log() and uppy2.info() calls throughout the preprocessor code. These are Uppy's built-in logging methods, which may be acceptable, but could be wrapped in development checks.
Suggestion: Wrap in a development check if logging should be suppressed in production: if (process.env.NODE_ENV === 'development') { uppy2.log(...); }

🔵 LOW - Magic number 32 in filename generation (Agent: quality, Category: quality, Confidence: 70%)
Description: The number 32 is used without explanation for generating random filenames.
Suggestion: Extract to a named constant: private readonly RANDOM_FILENAME_LENGTH = 32;

🔵 LOW - Global window assignment with @ts-ignore (Agent: react, Category: security, Confidence: 60%)
Description: The code assigns application configuration to window.vars (line 60) using @ts-ignore (line 59). This exposes environment variables and configuration globally, which could leak sensitive information to the browser console.
Suggestion: Review what data is exposed via otherProps and ensure no sensitive information is included. Consider a more controlled state access pattern.

19 issue(s) outside PR diff:
🔴 CRITICAL - Unhandled promise rejection in fetch().json() (Agent: bugs, Category: bug, Confidence: 85%)
Why this matters: Unhandled rejections in Promise.all will fail silently or crash the operation without proper user feedback.
Description: fetch() and .json() calls inside Promise.all have no error handling. Network failures or invalid JSON will cause unhandled promise rejection.
Suggestion: Wrap in try-catch: try { const res = await fetch(...); if (!res.ok) throw new Error(...); return res.json(); } catch (error) { ... }

🟠 HIGH - Module-level S3 client reads env vars at import time (Agent: architecture, Category: quality, Confidence: 85%)
Description: The R2 client is instantiated at module level, reading process.env variables during import. Missing env vars cause module load failures. Also makes testing difficult.
Suggestion: Move client creation into a factory function or service class with lazy initialization and dependency injection support.

🟠 HIGH - Potential undefined from split().pop() (Agent: bugs, Category: bug, Confidence: 85%)
Description: Array.pop() on the file variable can return undefined if the file URL has no separators or is empty, and is passed directly to saveFile.
Suggestion: Use nullish coalescing: const name = file.split('/').pop() ?? 'unknown';

🟠 HIGH - File size validation inconsistent between frontend and backend (Agent: consistency, Category: quality, Confidence: 95%)
Description: Frontend allows 30MB images (line 140) but backend only accepts 10MB (custom.upload.validation.ts line 39). Users will see confusing rejection errors.
Suggestion: Align limits: either increase backend to 30MB or reduce frontend to 10MB. Extract to a shared constants file.

🟠 HIGH - Sequential database updates in loop - N+1 pattern (Agent: performance, Category: performance, Confidence: 92%)
Description: disableIntegrations updates channels one-by-one in a loop. For N channels, this executes N individual UPDATE queries.
Suggestion: Use updateMany: await this._integration.model.integration.updateMany({ where: { id: { in: getChannels.map(c => c.id) } }, data: { disabled: true } });

🟠 HIGH - Unhandled promise rejection in fetch call (Agent: react, Category: bug, Confidence: 90%)
Description: The fetchUploadApiEndpoint function calls fetch without try-catch and doesn't check response.ok before calling .json(). Network errors and error HTTP responses will cause unhandled rejections.
Suggestion: Add error handling around the fetch and check response.ok before parsing.

🟠 HIGH - Public ACL hardcoded for Cloudflare R2 uploads (Agent: security, Category: security, Confidence: 75%)
Description: Cloudflare R2 uploads use hardcoded ACL: 'public-read' (line 95). All uploaded files are publicly accessible.
Suggestion: Make ACL configurable. Consider 'private' as default with signed URLs for public sharing.

🟡 MEDIUM - Duplicated S3Client initialization logic (Agent: react, Category: quality, Confidence: 75%)
Why this matters: Reduces maintenance cost and prevents logical drift between implementations.
Description: S3Client initialization with Cloudflare credentials is duplicated between r2.uploader.ts and cloudflare.storage.ts with similar endpoint and credential setup.
Suggestion: Create a shared function: export function createCloudflareS3Client(): S3Client in a utils file to avoid duplication.

🟡 MEDIUM - Monolithic file with mixed responsibilities (Agent: architecture, Category: quality, Confidence: 70%)
Why this matters: Functional code with shared state is harder to test and maintain than class-based encapsulation.
Description: The file exports a default handler function plus multiple standalone functions sharing a module-level R2 client singleton, mixing concerns of HTTP handling and S3 operations.
Suggestion: Create an R2Uploader class similar to S3Uploader that encapsulates all operations and the client instance.

🟡 MEDIUM - Cloudflare credentials from env without rotation mechanism (Agent: compliance, Category: security, Confidence: 65%)
Description: The R2 uploader loads Cloudflare credentials directly from process.env without secrets manager integration. No credential rotation mechanism is implemented.
Suggestion: Consider using a secrets manager for credential storage and implement a credential rotation mechanism if compliance requirements demand it.

🟡 MEDIUM - Hardcoded user-facing error messages (Agent: accessibility, Category: accessibility, Confidence: 90%)
Description: Error messages use hardcoded strings instead of the translation system. The useT() hook is imported (line 12) and used elsewhere (line 297) but not for these error messages.
Suggestion: Use the useT() hook for error messages: toast.show(t('upload', 'filetype-not-allowed', { fileType, allowedTypes }))

🟡 MEDIUM - useEffect missing dependency - stale window.vars (2 occurrences) (Agent: bugs, Category: bug)

🟡 MEDIUM - Missing accessibility attributes on Dashboard (Agent: accessibility, Category: accessibility, Confidence: 65%)
Why this matters: File upload interfaces should be accessible to screen reader users. However, Uppy Dashboard may have its own accessibility features.
Description: The Uppy Dashboard component wrapper div has class 'pointer-events-none' but lacks ARIA labels for screen reader users.
Suggestion: Add an aria-label to the wrapping div.

🟡 MEDIUM - Token updates without role-based authorization (Agent: compliance, Category: security, Confidence: 65%)
Description: createOrUpdateIntegration upserts sensitive token data. While org ID is checked via composite key, there is no role verification for token operations.
Suggestion: Consider adding a role check for token modifications and audit logging for all token changes.

🟡 MEDIUM - Hardcoded file size limits should be configurable (Agent: general, Category: style, Confidence: 70%)
Description: Image and video size limits are hardcoded. These should be environment-configurable for different deployment scenarios.
Suggestion: Move limits to environment variables passed via VariableContext: MAX_IMAGE_SIZE_MB and MAX_VIDEO_SIZE_MB with defaults.

🟡 MEDIUM - Missing dependency in useCallback (Agent: react, Category: quality, Confidence: 20%)
Description: The useCallback at line 29 has [onUploadSuccess] in dependencies but the original claim was an empty array; verified against line 34.
Suggestion: No action needed - the dependency array is correct.

🔵 LOW - uppy.log() calls without dev-only checks (Agent: accessibility, Category: quality, Confidence: 65%)
Description: Multiple uppy2.log() and uppy2.info() calls throughout the preprocessor code. These are Uppy's built-in logging methods, which may be acceptable, but could be wrapped in development checks.
Suggestion: Wrap in a development check if logging should be suppressed in production: if (process.env.NODE_ENV === 'development') { uppy2.log(...); }

🔵 LOW - Global window assignment with @ts-ignore (Agent: react, Category: security, Confidence: 60%)
Description: The code assigns application configuration to window.vars (line 60) using @ts-ignore (line 59). This exposes environment variables and configuration globally, which could leak sensitive information to the browser console.
Suggestion: Review what data is exposed via otherProps and ensure no sensitive information is included. Consider a more controlled state access pattern.
Any update on this?
What kind of change does this PR introduce?
Feature: Add AWS S3 and MinIO (S3-compatible) storage provider support
This PR introduces a new storage provider option (s3) that allows users to store media files on AWS S3 or any S3-compatible service like MinIO, DigitalOcean Spaces, or Backblaze B2.

Why was this change needed?
Fixes #1124
Fixes #1119
Fixes #996
Currently, Postiz only supports two storage options: local (filesystem) and cloudflare (Cloudflare R2).
Many self-hosted users have requested S3 support because:
Existing Infrastructure: Users already have AWS S3 buckets or MinIO deployments and don't want to create a separate Cloudflare account just for Postiz.
Cost Optimization: AWS S3 may be more cost-effective for users already in the AWS ecosystem, and MinIO provides completely self-hosted object storage with zero external dependencies.
Data Sovereignty: Self-hosted MinIO allows users to keep all media on their own infrastructure - critical for GDPR compliance, enterprise security requirements, and air-gapped environments.
Vendor Neutrality: Adding S3 support gives users more flexibility in choosing their storage provider.
Changes Made
New Files
- libraries/nestjs-libraries/src/upload/s3.storage.ts (implements IUploadProvider)
- libraries/nestjs-libraries/src/upload/s3.uploader.ts
- libraries/nestjs-libraries/src/upload/s3.utils.ts

Modified Files
- upload.factory.ts: added 's3' case to the factory pattern
- media.controller.ts: routes uploads based on STORAGE_PROVIDER
- media.service.ts: added storage.removeFile() call when deleting media
- uppy.upload.ts: added 's3' case (uses the same AwsS3Multipart plugin)
- variable.context.tsx: added 's3' to the storage provider type
- cloudflare.storage.ts: implemented removeFile() (was a no-op), fixed URL trailing slash handling
- r2.uploader.ts
- local.storage.ts
- .env.example

New Environment Variables
Backward Compatibility
This change is 100% backward compatible:
- The s3 code path only activates when STORAGE_PROVIDER === 's3'
- Existing local and cloudflare providers work exactly as before

Other information:
Configuration Examples
AWS S3:
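The example content did not survive extraction; a plausible .env sketch built only from the variables this PR introduces (all values are placeholders):

STORAGE_PROVIDER=s3
S3_ACCESS_KEY=AKIAXXXXXXXXXXXXXXXX
S3_SECRET_KEY=your-secret-key
S3_REGION=us-east-1
S3_BUCKET=my-postiz-media
# S3_ENDPOINT is omitted for AWS; S3_BUCKET_URL is optional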
MinIO (self-hosted):
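Likewise for MinIO, a placeholder sketch using the same variables:

STORAGE_PROVIDER=s3
S3_ENDPOINT=https://minio.example.com
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_REGION=us-east-1
S3_BUCKET=postiz
S3_BUCKET_URL=https://minio.example.com/postiz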
MinIO Bucket Configuration
MinIO users need to enable public read access on their bucket:
mc anonymous set download myminio/bucket-name

This can be automated in docker-compose with an init container, as sketched below.
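A sketch of that init container; the service names, credentials, and image tag are assumptions, with only the mc anonymous command taken from above:

services:
  minio-init:
    image: minio/mc
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "
      mc alias set myminio http://minio:9000 minioadmin minioadmin &&
      mc mb --ignore-existing myminio/bucket-name &&
      mc anonymous set download myminio/bucket-name
      "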
Testing Performed
Verified date-based directory organization (YYYY/MM/DD/filename)
Checklist:
Put a "X" in the boxes below to indicate you have followed the checklist;